
    Linear Programming Bounds for Randomly Sampling Colorings

    Here we study the problem of sampling random proper colorings of a bounded degree graph. Let $k$ be the number of colors and let $d$ be the maximum degree. In 1999, Vigoda showed that the Glauber dynamics is rapidly mixing for any $k > \frac{11}{6} d$. It turns out that there is a natural barrier at $\frac{11}{6}$, below which there is no one-step coupling that is contractive, even for the flip dynamics. We use linear programming and duality arguments to guide our construction of a better coupling. We fully characterize the obstructions to going beyond $\frac{11}{6}$. These examples turn out to be quite brittle, and even starting from one, they are likely to break apart before the flip dynamics changes the distance between two neighboring colorings. We use this intuition to design a variable-length coupling that shows that the Glauber dynamics is rapidly mixing for any $k \ge \left(\frac{11}{6} - \epsilon_0\right) d$, where $\epsilon_0 \geq 9.4 \cdot 10^{-5}$. This is the first improvement to Vigoda's analysis that holds for general graphs. Comment: 30 pages, 3 figures; fixed some typos
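    As a concrete point of reference (not taken from the paper), here is a minimal Python sketch of the single-site update that the Glauber dynamics performs on proper colorings; the function name and the adjacency-list graph representation are our own illustrative choices. The paper's contribution is the coupling analysis of this chain, not the chain itself.

```python
import random

def glauber_step(coloring, adj, k):
    """Perform one step of the Glauber dynamics on proper k-colorings.

    coloring: dict mapping each vertex to a color in range(k)
    adj: dict mapping each vertex to a list of its neighbors
    """
    v = random.choice(list(coloring))        # pick a uniformly random vertex
    blocked = {coloring[u] for u in adj[v]}  # colors used by v's neighbors
    available = [c for c in range(k) if c not in blocked]
    coloring[v] = random.choice(available)   # nonempty whenever k > max degree
    return coloring
```

    Rapid mixing means the number of such updates needed to get close to the uniform distribution over proper colorings grows only polynomially in the number of vertices.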

    Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications

    We introduce the problem of learning mixtures of $k$ subcubes over $\{0,1\}^n$, which contains many classic learning theory problems as a special case (and is itself a special case of others). We give a surprising $n^{O(\log k)}$-time learning algorithm based on higher-order multilinear moments. It is not possible to learn the parameters because the same distribution can be represented by quite different models. Instead, we develop a framework for reasoning about how multilinear moments can pinpoint essential features of the mixture, like the number of components. We also give applications of our algorithm to learning decision trees with stochastic transitions (which also capture interesting scenarios where the transitions are deterministic but there are latent variables). Using our algorithm for learning mixtures of subcubes, we can approximate the Bayes optimal classifier within additive error $\epsilon$ on $k$-leaf decision trees with at most $s$ stochastic transitions on any root-to-leaf path in $n^{O(s + \log k)} \cdot \text{poly}(1/\epsilon)$ time. In this stochastic setting, the classic Occam algorithms for learning decision trees with zero stochastic transitions break down, while the low-degree algorithm of Linial et al. inherently has a quasipolynomial dependence on $1/\epsilon$. In contrast, as we will show, mixtures of $k$ subcubes are uniquely determined by their degree $2 \log k$ moments and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on $1/\epsilon$ of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in being able to accommodate stochastic transitions. Using our multilinear moment techniques, we also give the first improved upper and lower bounds since the work of Feldman et al. for the related but harder problem of learning mixtures of binary product distributions. Comment: 62 pages; to appear in STOC 2019
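    To make the objects concrete, the following Python sketch (our own illustration, not the paper's algorithm) samples from a mixture of subcubes and empirically estimates one of the multilinear moments the algorithm is built on; `sample_mixture`, `multilinear_moment`, and the `'*'` encoding of free coordinates are hypothetical conveniences.

```python
import random

def sample_mixture(centers, weights):
    """Draw one sample from a mixture of subcubes over {0,1}^n.

    centers: list of tuples with entries 0, 1, or '*' (free coordinate,
             drawn uniformly from {0,1})
    weights: mixing weights summing to 1
    """
    c = random.choices(centers, weights)[0]
    return tuple(random.randint(0, 1) if b == '*' else b for b in c)

def multilinear_moment(samples, S):
    """Empirical estimate of E[prod_{i in S} x_i] over the sample set."""
    return sum(all(x[i] for i in S) for x in samples) / len(samples)
```

    The identifiability result quoted in the abstract says moments of degree up to $2 \log k$ already determine the mixture, which is what makes an $n^{O(\log k)}$ running time plausible.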

    A faster and simpler algorithm for learning shallow networks

    We revisit the well-studied problem of learning a linear combination of $k$ ReLU activations given labeled examples drawn from the standard $d$-dimensional Gaussian measure. Chen et al. [CDG+23] recently gave the first algorithm for this problem to run in $\text{poly}(d, 1/\varepsilon)$ time when $k = O(1)$, where $\varepsilon$ is the target error. More precisely, their algorithm runs in time $(d/\varepsilon)^{\mathrm{quasipoly}(k)}$ and learns over multiple stages. Here we show that a much simpler one-stage version of their algorithm suffices, and moreover its runtime is only $(d/\varepsilon)^{O(k^2)}$. Comment: 14 pages
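    For orientation, here is a short Python sketch (ours, not from the paper) of the data model being learned: labels are a fixed linear combination of $k$ ReLUs evaluated on standard Gaussian inputs. The function `make_data` and its parameters are illustrative; the learning algorithm itself is not shown.

```python
import numpy as np

def make_data(A, w, m, rng=np.random.default_rng(0)):
    """Labeled examples for f(x) = sum_i w_i * ReLU(<a_i, x>).

    A: (k, d) array whose rows a_i are the hidden directions
    w: (k,) array of combination weights
    m: number of examples to generate
    """
    k, d = A.shape
    X = rng.standard_normal((m, d))    # x ~ N(0, I_d), the Gaussian measure
    Y = np.maximum(X @ A.T, 0.0) @ w   # ReLU applied entrywise, then combined
    return X, Y
```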

    Futility and utility of a few ancillas for Pauli channel learning

    In this paper we revisit one of the prototypical tasks for characterizing the structure of noise in quantum devices: estimating the eigenvalues of an $n$-qubit Pauli noise channel. Prior work (Chen et al., 2022) established exponential lower bounds for this task for algorithms with limited quantum memory. We first improve upon their lower bounds and show: (1) Any algorithm without quantum memory must make $\Omega(2^n/\epsilon^2)$ measurements to estimate each eigenvalue within error $\epsilon$. This is tight and implies the randomized benchmarking protocol is optimal, resolving an open question of (Flammia and Wallman, 2020). (2) Any algorithm with $\le k$ ancilla qubits of quantum memory must make $\Omega(2^{(n-k)/3})$ queries to the unknown channel. Crucially, unlike in (Chen et al., 2022), our bound holds even if arbitrary adaptive control and channel concatenation are allowed. In fact these lower bounds, like those of (Chen et al., 2022), hold even for the easier hypothesis testing problem of determining whether the underlying channel is completely depolarizing or has exactly one other nontrivial eigenvalue. Surprisingly, we show that: (3) With only $k = 2$ ancilla qubits of quantum memory, there is an algorithm that solves this hypothesis testing task with high probability using a single measurement. Note that (3) does not contradict (2), as the protocol concatenates exponentially many queries to the channel before the measurement. This result suggests a novel mechanism by which channel concatenation and $O(1)$ qubits of quantum memory could work in tandem to yield striking speedups for quantum process learning that are not possible for quantum state learning. Comment: 35 pages, 1 figure
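    As a purely classical illustration (not the paper's protocol), the sketch below simulates the measurement statistics behind memoryless eigenvalue estimation: each ancilla-free measurement behaves like a $\pm 1$ coin whose bias is set by the eigenvalue, so error $\epsilon$ costs on the order of $1/\epsilon^2$ shots per eigenvalue. The $2^n$ factor in bound (1), which comes from isolating one of exponentially many eigenvalues, is not modeled here; all names are hypothetical.

```python
import numpy as np

def estimate_eigenvalue(lam, shots, rng=np.random.default_rng(1)):
    """Simulate a memoryless estimate of one Pauli-channel eigenvalue.

    Preparing a +1 eigenstate of a Pauli P_a, applying the channel, and
    measuring P_a yields outcome +1 with probability (1 + lam) / 2, where
    lam in [-1, 1]. The empirical mean of the +/-1 outcomes estimates lam
    to error O(1 / sqrt(shots)).
    """
    outcomes = rng.choice([1, -1], size=shots,
                          p=[(1 + lam) / 2, (1 - lam) / 2])
    return outcomes.mean()
```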